This work proposes a novel method for predicting vehicle trajectories in highway scenarios using an efficient bird's-eye-view representation and convolutional neural networks. With this basic visual representation, vehicle positions, motion histories, road configuration, and vehicle interactions are easily included in the prediction model. A U-Net model is chosen as the prediction kernel, generating a future visual representation of the scene through an image-to-image regression approach. A method is implemented to extract vehicle positions from the generated graphical representation with sub-pixel resolution. The approach is trained and evaluated on the PREVENTION dataset, an on-board sensor dataset. Different network configurations and scene representations are evaluated. The study finds that a U-Net with 6 depth levels, a linear terminal layer, and a Gaussian representation of the vehicles is the best-performing configuration. Using lane markings is found not to improve prediction performance. The average prediction errors are 0.47 and 0.38 meters, and the final prediction errors are 0.76 and 0.53 meters, for the longitudinal and lateral coordinates respectively, over a predicted trajectory length of 2.0 seconds. Compared with baseline methods, prediction errors are up to 50% lower.
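The sub-pixel extraction step is not detailed in the abstract; one common approach consistent with a Gaussian vehicle representation is an intensity-weighted centroid around the heatmap peak. A minimal sketch of that idea (function name, window size, and grid values are illustrative assumptions, not taken from the paper):

```python
def subpixel_peak(heatmap, window=2):
    """Estimate a blob's center with sub-pixel resolution.

    heatmap: 2D list of floats (e.g. a predicted Gaussian vehicle blob).
    Returns (row, col) as floats: the intensity-weighted centroid of a
    (2*window+1)^2 patch around the integer argmax.
    """
    rows, cols = len(heatmap), len(heatmap[0])
    # Integer-resolution peak.
    pr, pc = max(
        ((r, c) for r in range(rows) for c in range(cols)),
        key=lambda rc: heatmap[rc[0]][rc[1]],
    )
    # Intensity-weighted centroid of the local window (sub-pixel).
    total = wr = wc = 0.0
    for r in range(max(0, pr - window), min(rows, pr + window + 1)):
        for c in range(max(0, pc - window), min(cols, pc + window + 1)):
            w = heatmap[r][c]
            total += w
            wr += w * r
            wc += w * c
    return wr / total, wc / total
```

On a synthetic Gaussian centered between pixels, the centroid recovers the fractional position far more accurately than the integer argmax alone.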
Using cameras for vehicle speed measurement is much more cost-effective than other technologies such as inductive loops, radar, or lasers. However, accurate speed measurement remains a challenge due to the inherent limitations of cameras in providing accurate range estimates. In addition, classical vision-based methods are highly sensitive to the extrinsic calibration between the camera and the road. In this context, data-driven approaches are an interesting alternative. However, data collection requires a complex and costly setup to record videos from cameras synchronized with high-precision speed sensors in order to generate ground-truth speed values. It has recently been shown that driving simulators such as CARLA can serve as a powerful alternative for generating large synthetic datasets for vehicle speed estimation from a single camera. In this paper, we study the same problem using multiple cameras at different virtual locations and with different extrinsic parameters. We address the question of whether complex 3D-CNN architectures are able to implicitly learn view-invariant speeds with a single model, or whether view-specific models are more appropriate. The results are very promising, as they show that a single model trained with data from multiple views reports better accuracy than camera-specific models, paving the way towards a view-invariant vehicle speed measurement system.
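For intuition on the calibration sensitivity mentioned above, consider the classical alternative to data-driven estimation: once the camera-to-road geometry yields a ground-plane scale, speed follows directly from per-frame pixel displacement, so any error in that scale propagates linearly into the speed. A toy sketch (the constant meters-per-pixel scale and all names are illustrative assumptions; real setups use a full homography):

```python
import math

def speed_kmh(track_px, m_per_px, fps):
    """Estimate vehicle speed from a track of image positions.

    track_px: list of (x, y) pixel positions, one per video frame.
    m_per_px: assumed ground-plane scale (meters per pixel); in practice
              this comes from the camera-to-road extrinsic calibration,
              which is exactly what makes classical methods so sensitive.
    fps:      video frame rate.
    """
    # Total path length in meters, summed over consecutive frames.
    dist_m = sum(
        math.hypot(x1 - x0, y1 - y0) * m_per_px
        for (x0, y0), (x1, y1) in zip(track_px, track_px[1:])
    )
    elapsed_s = (len(track_px) - 1) / fps
    return dist_m / elapsed_s * 3.6  # m/s -> km/h
```

A 10% error in `m_per_px` yields a 10% error in the reported speed, which is the sensitivity that motivates learning-based approaches.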
The role of simulation in autonomous driving is becoming increasingly important due to the need for rapid prototyping and extensive testing. The use of physics-based simulation brings multiple benefits and advantages at a reasonable cost while eliminating risks to prototypes, drivers, and vulnerable road users. However, there are two main limitations. First, the well-known reality gap, which refers to the discrepancy between reality and simulation, prevents simulated autonomous driving experience from translating into effective real-world performance. Second, there is a lack of empirical knowledge about the behavior of real agents, including backup drivers or passengers and other road users such as vehicles, pedestrians, or cyclists. Agent simulation is usually pre-programmed deterministically, randomized probabilistically, or generated from real data, but it does not represent the behavior of real agents interacting with a specific simulated scenario. In this paper, we propose a preliminary framework to enable real-time interaction between real agents and the simulated environment (including autonomous vehicles), and to generate synthetic sequences of simulated sensor data from multiple views, which can be used to train predictive systems that rely on behavioral models. Our approach integrates immersive virtual reality and human motion capture systems with the CARLA simulator for autonomous driving. We describe the proposed hardware and software architecture, and discuss the so-called behavioral gap, or presence. We present preliminary but promising results that support the potential of this approach and discuss future steps.
Models of sensory processing and learning in the cortex need to efficiently assign credit to synapses in all areas. In deep learning, a known solution is error backpropagation, which however requires biologically implausible weight transport from feed-forward to feedback paths. We introduce Phaseless Alignment Learning (PAL), a bio-plausible method to learn efficient feedback weights in layered cortical hierarchies. This is achieved by exploiting the noise naturally found in biophysical systems as an additional carrier of information. In our dynamical system, all weights are learned simultaneously with always-on plasticity and using only information locally available to the synapses. Our method is completely phase-free (no forward and backward passes or phased learning) and allows for efficient error propagation across multi-layer cortical hierarchies, while maintaining biologically plausible signal transport and learning. Our method is applicable to a wide class of models and improves on previously known biologically plausible ways of credit assignment: compared to random synaptic feedback, it can solve complex tasks with fewer neurons and learn more useful latent representations. We demonstrate this on various classification tasks using a cortical microcircuit model with prospective coding.
The Predicting Media Memorability task in the MediaEval evaluation campaign has been running annually since 2018, and several different tasks and data sets have been used over this period. This has allowed us to compare the performance of many memorability prediction techniques on the same data and in a reproducible way, and to refine and improve on those techniques. The resources created to compute media memorability are now being used by researchers well beyond the actual evaluation campaign. In this paper we present a summary of the task, including the collective lessons we have learned for the research community.
We present edBB-Demo, a demonstrator of an AI-powered research platform for student monitoring in remote education. The edBB platform aims to study the challenges associated with user recognition and behavior understanding in digital platforms. This platform has been developed for data collection, acquiring signals from a variety of sensors including keyboard, mouse, webcam, microphone, smartwatch, and an electroencephalography (EEG) band. The information captured from the sensors during the student sessions is modelled in a multimodal learning framework. The demonstrator includes: i) biometric user authentication in an unsupervised environment; ii) human action recognition based on remote video analysis; iii) heart rate estimation from webcam video; and iv) attention level estimation from facial expression analysis.
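The heart-rate component is not described further here; webcam-based estimators typically follow the remote photoplethysmography (rPPG) principle, recovering the pulse frequency from subtle skin-color variations in the face region. A minimal sketch under that assumption (the signal model, frequency band, and names are illustrative, not edBB's actual pipeline):

```python
import math

def heart_rate_bpm(green_means, fps):
    """Estimate heart rate from the per-frame mean green intensity of a
    face region (rPPG principle: blood volume changes modulate skin
    color). Picks the dominant DFT frequency in the 0.7-3 Hz band,
    i.e. 42-180 bpm."""
    n = len(green_means)
    mean = sum(green_means) / n
    x = [v - mean for v in green_means]  # remove DC component
    best_f, best_p = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * fps / n
        if not 0.7 <= f <= 3.0:
            continue  # outside the physiological band
        # Power of DFT bin k, computed directly.
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = re * re + im * im
        if p > best_p:
            best_f, best_p = f, p
    return best_f * 60.0
```

Real pipelines add face tracking, detrending, and motion-robust color projections on top of this frequency-domain core.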
This paper presents an automatic approach to creating taxonomies of technical terms based on the Cooperative Patent Classification (CPC). The resulting taxonomy contains about 170k nodes in 9 separate technological branches and is freely available. We also show that a Text-to-Text Transfer Transformer (T5) model can be fine-tuned to generate hypernyms and hyponyms with relatively high precision, confirming the manually assessed quality of the resource. The T5 model opens the taxonomy to any new technological terms for which a hypernym can be generated, thus making the resource updateable with new terms, an essential feature for the constantly evolving field of technological terminology.
Explainable artificial intelligence is proposed to provide explanations for the reasoning performed by an Artificial Intelligence. There is no consensus on how to evaluate the quality of these explanations, since even the definition of explanation itself is not clear in the literature. In particular, for the widely known Local Linear Explanations, there are qualitative proposals for the evaluation of explanations, although they suffer from theoretical inconsistencies. The case of images is even more problematic, where a visual explanation may seem to explain a decision while what it really does is detect edges. There are a large number of metrics in the literature specialized in quantitatively measuring different qualitative aspects, so we should be able to develop metrics capable of measuring the desirable aspects of explanations in a robust and correct way. In this paper, we propose a procedure called REVEL to evaluate different aspects concerning the quality of explanations, with a theoretically coherent development. This procedure makes several advances on the state of the art: it standardizes the concept of explanation and develops a series of metrics that allow not only comparisons between explanations but also absolute information about the explanation itself. The experiments have been carried out on four image datasets as benchmarks, where we show REVEL's descriptive and analytical power.
In this work, we propose a framework relying solely on chat-based customer support (CS) interactions for predicting the recommendation decision of individual users. For our case study, we analyzed a total of 16.4k users and 48.7k customer support conversations within the financial vertical of a large e-commerce company in Latin America. Our main contribution and objective is to use Natural Language Processing (NLP) to assess and predict recommendation behavior where, in addition to static sentiment analysis, we exploit the predictive power of each user's sentiment dynamics. Our results show that, with the corresponding feature interpretability, it is possible to predict the likelihood that a user will recommend a product or service, based solely on the message-wise sentiment evolution of their CS conversations, in a fully automated way.
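The abstract does not specify how sentiment dynamics are encoded; one plausible minimal feature in this spirit is the least-squares slope of message-wise sentiment across a conversation. A sketch of that idea (the feature name and formulation are illustrative assumptions, not the paper's actual features):

```python
def sentiment_slope(scores):
    """Least-squares slope of per-message sentiment scores over time.

    scores: sentiment of each CS message in order, e.g. in [-1, 1].
    A positive slope suggests the conversation improved the user's
    mood, a negative slope that it deteriorated; this is a simple
    'dynamics' feature complementing a static average sentiment.
    """
    n = len(scores)
    mean_x = (n - 1) / 2          # mean of message indices 0..n-1
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(scores))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den
```

Two conversations with the same average sentiment but opposite slopes would then be distinguishable to the downstream classifier.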
Abbreviations present a significant challenge for NLP systems because they cause tokenization and out-of-vocabulary errors. They can also make the text less readable, especially in printed reference books, where they are used extensively. Abbreviations are especially problematic in low-resource settings, where systems are less robust to begin with. In this paper, we propose a new method for addressing the problems caused by a high density of domain-specific abbreviations in a text. We apply this method to the case of a Slovenian biographical lexicon and evaluate it on a newly developed gold-standard dataset of 51 Slovenian biographies. Our abbreviation identification method performs significantly better than commonly used ad-hoc solutions, especially at identifying unseen abbreviations. We also propose, and present the results of, a method for expanding the identified abbreviations in context.
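As a point of comparison, the "commonly used ad-hoc solutions" such a method is measured against are typically rule-based: a known-abbreviation list plus simple pattern heuristics. A minimal sketch of that kind of baseline (the list contents, patterns, and English example are illustrative, not the paper's baseline):

```python
import re

# Illustrative known-abbreviation list; a real system would load a
# domain lexicon (e.g. one built for a biographical lexicon).
KNOWN = {"dr.", "prof.", "st.", "approx.", "e.g."}

def find_abbreviations(text):
    """Rule-based abbreviation spotting: a token is flagged if it is in
    the known list, or if it is a short lowercase period-terminated
    token followed by a lowercase word (i.e. not a sentence end)."""
    out = []
    tokens = re.findall(r"\S+", text)
    for tok, nxt in zip(tokens, tokens[1:] + [""]):
        low = tok.lower()
        if low in KNOWN:
            out.append(tok)
        elif re.fullmatch(r"[a-z]{1,4}\.", low) and nxt[:1].islower():
            out.append(tok)
    return out
```

Such heuristics miss unseen abbreviations and confuse sentence-final periods, which is precisely where a learned identification method has room to improve.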